Add logging for V2 test scheduler as temporary workaround until V2 UI is finished #7729
Conversation
Open questions:
- Do we like the output shown in the PR description?
- Will this support config logging levels? I haven't looked into this yet.
- The coordinator tests now fail because their stdout contains the logging output. Should we teach the tests to tolerate logs, or somehow turn off the logs in those tests?
pants/tests/python/pants_test/rules/test_test.py
Lines 105 to 122 in 31f4c73
```python
def test_coordinator_python_test(self):
    target_adaptor = PythonTestsAdaptor(type_alias='python_tests')
    result = run_rule(coordinator_of_tests, HydratedTarget(Address.parse("some/target"), target_adaptor, ()), {
        (PyTestResult, HydratedTarget): lambda _: PyTestResult(status=Status.FAILURE, stdout='foo', stderr=''),
    })
    self.assertEqual(result, TestResult(status=Status.FAILURE, stdout='foo', stderr=''))

def test_coordinator_unknown_test(self):
    target_adaptor = PythonTestsAdaptor(type_alias='unknown_tests')
    with self.assertRaises(Exception) as cm:
        run_rule(coordinator_of_tests, HydratedTarget(Address.parse("some/target"), target_adaptor, ()), {
            (PyTestResult, HydratedTarget): lambda _: PyTestResult(status=Status.FAILURE, stdout='foo', stderr=''),
        })
    self.assertEqual(str(cm.exception), "Didn't know how to run tests for type unknown_tests")
```
cc @stuhood
Thanks, just nits.
Yes, it does hook up properly.
@stuhood how do you recommend approaching this? Usually we would use Should we just modify the tests and accept that they will now be a little more brittle? Or change from
If those tests were extending In the case of an integration test, either disabling logging via
Oops, turns out they did indeed extend it. Thanks for the help!
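For reference, the standard-library way to keep log lines out of a test's stdout assertions is `unittest.TestCase.assertLogs`, which captures records inside the context manager instead of letting them reach the console. This is a minimal, self-contained sketch; `run_tests_for_target` is a hypothetical stand-in for the real coordinator rule, not Pants code:

```python
import logging
import unittest

logger = logging.getLogger(__name__)

def run_tests_for_target(target_name):
    # Hypothetical stand-in for the coordinator: logs progress, returns a result.
    logger.info("Starting tests for %s", target_name)
    result = "FAILURE"
    logger.info("Tests for %s finished: %s", target_name, result)
    return result

class CoordinatorLoggingTest(unittest.TestCase):
    def test_logs_do_not_break_assertions(self):
        # assertLogs captures the records, so assertions on the returned
        # value stay stable no matter how chatty the logging becomes.
        with self.assertLogs(logger, level="INFO") as captured:
            result = run_tests_for_target("some/target")
        self.assertEqual(result, "FAILURE")
        self.assertEqual(len(captured.records), 2)
        self.assertIn("Starting tests for some/target", captured.output[0])
```

A test base class that installs such capture (or disables logging wholesale) avoids touching each assertion individually.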
Thanks!
I'm undecided about the format of this. In particular, I'd (personally; I may be wrong) love for successful test runs to have as little output as possible, eventually something very close to:
Basically, I usually don't care about the tests that have succeeded, just the ones that have not. If we log at the So, my proposal:
So, I'd argue in favour of moving this to the
But it's also a matter of opinion; I'm fine with the format and the code changes in general :)
I think that's a great suggestion! Really we only care about this logging when using our But you're right that in general people won't care when the test started or finished, especially when locally running one or two targets.
Hm, on second thought, I'm not as certain anymore. Running with
All we're after is the logs about when tests start and when they finish. @illicitonion @stuhood @cosmicexplorer what do you all recommend?
@blorente : With pantsd, the log message only triggers on the first run in a session, and then again when something has been invalidated.
@Eric-Arellano : IMO, leave it at info. We should be biasing toward the behavior with pantsd.
I just had a bunch of discussion with @blorente about this... I think this is a reasonable short-term hack, but I have some different ideas for the long term. I think that there are two use-cases for this logging:
There are two "modes" of pants running: The v2 UI is meant to solve both 1 and 2, for case a. I think the solution I'd like to see long term is something along the lines of: Until the v2 UI lands, I'm happy to add this logging, but I think it's an until-v2-ui hack rather than the right long-term solution. I'm cool with this being at info level, though I may remove the logging for the success cases.
Great feedback and plan, thank you. I linked to it in #6004 plus a TODO comment. Stu and I discussed that we're going to keep the success logging for now, both to avoid Travis timeouts and for the scenario of running a bunch of tests but the Pants process terminating early (either ctrl-c or a timeout), where we still want some useful information about the results.
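The interim approach agreed on above (log both start and finish, including successes, at info level) can be sketched with plain `logging`. The function and helper names here are hypothetical stand-ins for the real rules, not the actual Pants implementation:

```python
import logging
from enum import Enum

logger = logging.getLogger(__name__)

class Status(Enum):
    SUCCESS = "SUCCESS"
    FAILURE = "FAILURE"

def run_underlying_tests(address):
    # Stand-in for the real test execution; always "fails" in this sketch.
    return Status.FAILURE

def coordinator_of_tests(address):
    # Log before and after each target, so CI sees periodic output and an
    # interrupted run still leaves a partial record of which targets finished.
    logger.info("Starting tests: %s", address)
    result = run_underlying_tests(address)
    verb = "succeeded" if result is Status.SUCCESS else "failed"
    logger.info("Tests %s: %s", verb, address)
    return result
```

Logging the success case is what makes the output useful after a ctrl-c or a CI timeout, at the cost of noisier passing runs.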
We want to add logging to `./pants test` when using V2, both as a matter of UI and to avoid Travis timeouts from not producing output for 10+ minutes. Eventually, we should be doing this via a `Logging` singleton, similar to the `Console` singleton, per #6004. This acts as a temporary workaround in the meantime.

Result